JaGuard: Jamming Correction of GNSS Deviation with Deep Temporal Graphs

Kesić, Ivana, Blatnik, Aljaž, Fortuna, Carolina, Bertalanič, Blaž

arXiv.org Artificial Intelligence

Abstract--Global Navigation Satellite Systems (GNSS) face growing disruption from intentional jamming, undermining availability exactly when reliable positioning and timing are essential. We tackle this challenge by recasting jamming mitigation as a dynamic graph regression problem and propose Jamming Guardian (JaGuard), a new receiver-centric method based on deep temporal graph networks that estimates, and thereby corrects, the receiver's latitude and longitude errors. At each 1 Hz epoch, we model the satellite-receiver scene as a heterogeneous star graph with the receiver as the center node and the tracked satellites as leaves. The satellites carry time-varying attributes such as SNR, azimuth, elevation, and latitude/longitude. A single-layer Heterogeneous Graph ConvLSTM (HeteroGCLSTM) fuses one-hop spatial context with short-term temporal dynamics to produce a 2D deviation vector for error mitigation. We evaluate our approach on datasets collected from physical hardware (two different commercial receivers) subjected to controlled conducted RF interference. Interference is introduced with three jammer types: continuous wave (CW), multi-tone CW (3 tones), and wideband FM. Each jammer type was exercised at six power levels from 45 to 70 dBm, with 50 repetitions per scenario, including pre-jam, jam, and recovery phases. Compared to strong multivariate time series baselines (TSMixer MLP, uniform CNN, and Seq2Point CNN), our model consistently yields the lowest Mean Absolute Error (MAE) in positional deviation. Under severe jamming at 45 dBm, it achieves an MAE of 3.64-7.74 cm. On mixed-mode datasets that pool all power levels, the MAE is 3.78 cm for GP01 and 4.25 cm for U-blox 10, surpassing Seq2Point, TSMixer, and uniform CNN. A data-efficiency split further shows that with only 10% of the training data, our approach remains clearly ahead, achieving an MAE of about 20 cm versus 36-42 cm for the baselines.
Global Navigation Satellite Systems (GNSS) underpin nearly every critical infrastructure, from telecommunications [1], aviation safety [2], and power-grid synchronization [3], to emerging drone ecosystems where location privacy and integrity are paramount [4] and autonomous driving [5].
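As a rough illustration of the per-epoch input described in the abstract, the sketch below builds one heterogeneous star graph: the receiver at the center, tracked satellites as leaves, each satellite carrying [SNR, azimuth, elevation, latitude, longitude]. All names, the edge-type labels, and the feature values are illustrative assumptions, not the paper's actual data layout.

```python
# Hypothetical sketch of the per-epoch heterogeneous star graph: receiver at
# the center, tracked satellites as leaves. Names are illustrative only.

def build_star_graph(receiver_feat, satellites):
    """Build a heterogeneous star graph for one 1 Hz epoch.

    receiver_feat: feature vector for the single receiver node.
    satellites: dict mapping satellite id -> [snr, azimuth, elevation, lat, lon].
    Returns per-type node features and typed one-hop edges.
    """
    sat_ids = sorted(satellites)
    x = {
        "receiver": [receiver_feat],                     # one center node
        "satellite": [satellites[s] for s in sat_ids],   # leaf nodes
    }
    # One-hop star topology: every tracked satellite connects to receiver 0.
    edges = {
        ("satellite", "seen_by", "receiver"): [(i, 0) for i in range(len(sat_ids))],
        ("receiver", "tracks", "satellite"): [(0, i) for i in range(len(sat_ids))],
    }
    return x, edges

sats = {
    "G05": [42.0, 310.5, 55.2, 46.05, 14.47],  # SNR, azimuth, elevation, lat, lon
    "G12": [38.5, 120.0, 23.8, 46.05, 14.47],
    "G25": [35.1, 200.3, 70.1, 46.05, 14.47],
}
x, edges = build_star_graph([46.05, 14.47], sats)
print(len(x["satellite"]), len(edges[("satellite", "seen_by", "receiver")]))  # -> 3 3
```

A sequence of such graphs, one per epoch, is what a HeteroGCLSTM-style layer would consume to fuse spatial and short-term temporal context.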


A Reinforcement Learning Framework for Resource Allocation in Uplink Carrier Aggregation in the Presence of Self Interference

Bodempudi, Jaswanth, Sairam, Batta Siva, Haritha, Madepalli, Mattu, Sandesh Rao, Chockalingam, Ananthanarayanan

arXiv.org Artificial Intelligence

Carrier aggregation (CA) is a technique that allows mobile networks to combine multiple carriers to increase user data rate. On the uplink, for power-constrained users, this translates to the need for an efficient resource allocation scheme, where each user distributes its available power among its assigned uplink carriers. Choosing a good set of carriers and allocating appropriate power on the carriers is important. If the carrier allocation on the uplink is such that a harmonic of a user's uplink carrier falls on the downlink frequency of that user, it leads to a self-coupling-induced sensitivity degradation of that user's downlink receiver. In this paper, we model the uplink carrier aggregation problem as an optimal resource allocation problem with the associated constraints of nonlinearity-induced self interference (SI). This involves optimization over a discrete variable (which carriers need to be turned on) and a continuous variable (what power needs to be allocated on the selected carriers) in dynamic environments, a problem which is hard to solve using traditional methods owing to the mixed nature of the optimization variables and the additional need to consider the SI constraint. We adopt a reinforcement learning (RL) framework involving a compound-action actor-critic (CA2C) algorithm for the uplink carrier aggregation problem. We propose a novel reward function that is critical for enabling the proposed CA2C algorithm to efficiently handle SI. The CA2C algorithm along with the proposed reward function learns to assign and activate suitable carriers in an online fashion. Numerical results demonstrate that the proposed RL-based scheme is able to achieve higher sum throughputs compared to naive schemes. The results also demonstrate that the proposed reward function allows the CA2C algorithm to adapt the optimization both in the presence and absence of SI.
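The self-interference condition above (a harmonic of an uplink carrier landing inside the user's downlink band) is easy to check numerically. The sketch below is an illustrative version of that check, not the paper's model; the frequencies, bandwidth, and maximum harmonic order are invented for the example.

```python
# Illustrative check for the SI condition: does any harmonic of an assigned
# uplink carrier fall inside the user's downlink band? Parameters are assumed.

def harmonic_hits_downlink(f_ul_mhz, dl_center_mhz, dl_bw_mhz, max_order=5):
    """Return the first harmonic order whose frequency lands in the downlink
    band, or None if no harmonic up to max_order interferes."""
    lo = dl_center_mhz - dl_bw_mhz / 2
    hi = dl_center_mhz + dl_bw_mhz / 2
    for n in range(2, max_order + 1):
        if lo <= n * f_ul_mhz <= hi:
            return n
    return None

# The 2nd harmonic of an 880 MHz uplink carrier (1760 MHz) falls inside a
# 20 MHz downlink channel centered at 1765 MHz -> SI constraint violated.
print(harmonic_hits_downlink(880.0, 1765.0, 20.0))  # -> 2
print(harmonic_hits_downlink(700.0, 1765.0, 20.0))  # -> None
```

In an RL formulation such as the one described, a reward function could penalize carrier selections for which this check returns a harmonic order, steering the agent away from SI-inducing allocations.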


Joint Active RIS Configuration and User Power Control for Localization: A Neuroevolution-Based Approach

Stamatelis, George, Chen, Hui, Wymeersch, Henk, Alexandropoulos, George C.

arXiv.org Artificial Intelligence

This paper studies user localization aided by a Reconfigurable Intelligent Surface (RIS). A feedback link from the Base Station (BS) to the user is adopted to enable dynamic power control of the user pilot transmissions in the uplink. A novel multi-agent algorithm for the joint control of the RIS phase configuration and the user transmit power is presented, which is based on a hybrid approach integrating NeuroEvolution (NE) and supervised learning. The proposed scheme requires only single-bit feedback messages for the uplink power control, supports RIS elements with discrete responses, and is numerically shown to outperform fingerprinting, deep reinforcement learning baselines and backpropagation-based position estimators.
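The single-bit feedback idea can be sketched as a simple up/down power-control loop: the BS sends one bit per epoch and the user nudges its pilot transmit power accordingly. The step size, power limits, and bit semantics below are assumptions for illustration, not the paper's protocol.

```python
# Hedged sketch of a single-bit uplink power-control loop. Step size, limits,
# and the meaning of the feedback bit are invented for this example.

def update_power(p_dbm, feedback_bit, step_db=1.0, p_min=-10.0, p_max=23.0):
    """feedback_bit = 1 -> localization quality is sufficient, lower power to
    save energy; 0 -> insufficient, raise power. Clamp to the allowed range."""
    p = p_dbm - step_db if feedback_bit else p_dbm + step_db
    return max(p_min, min(p_max, p))

p = 10.0
for bit in [0, 0, 1, 0, 1, 1]:   # six feedback epochs from the BS
    p = update_power(p, bit)
print(p)  # -> 10.0
```

A learned controller, as in the paper's NE-based scheme, would replace the fixed step rule with a policy conditioned on the feedback history, but the one-bit interface stays the same.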


Industrial Energy Disaggregation with Digital Twin-generated Dataset and Efficient Data Augmentation

Internò, Christian, Castellani, Andrea, Schmitt, Sebastian, Stella, Fabio, Hammer, Barbara

arXiv.org Artificial Intelligence

Abstract--Industrial Non-Intrusive Load Monitoring (NILM) is limited by the scarcity of high-quality datasets and the complex variability of industrial energy consumption patterns. To address data scarcity and privacy issues, we introduce the Synthetic Industrial Dataset for Energy Disaggregation (SIDED), an open-source dataset generated using Digital Twin simulations. SIDED includes three types of industrial facilities across three different geographic locations, capturing diverse appliance behaviors, weather conditions, and load profiles. We also propose the Appliance-Modulated Data Augmentation (AMDA) method, a computationally efficient technique that enhances NILM model generalization by intelligently scaling appliance power contributions based on their relative impact. We show in experiments that NILM models trained with AMDA-augmented data significantly improve the disaggregation of energy consumption of complex industrial appliances like combined heat and power systems. Specifically, in our out-of-sample scenarios, models trained with AMDA achieved a Normalized Disaggregation Error of 0.167, outperforming models trained without data augmentation (0.451) and those trained with state-of-the-art data augmentation methods (0.290). Data distribution analyses confirm that AMDA effectively aligns training and test data distributions, enhancing model generalization.

Energy management has become increasingly important due to the undeniable reality of climate change and the rising global energy demand [1]. The industrial sector plays a significant role in international energy optimization [2], [3], necessitating heightened awareness of energy consumption to enhance efficiency and sustainability.

C. Internò and B. Hammer are with the Machine Learning Group, Center for Cognitive Interaction Technology (CITEC), University of Bielefeld, Bielefeld, Germany. C. Internò, A. Castellani and S. Schmitt are with the Honda Research Institute EU, Offenbach am Main, Germany.
F. Stella is with the Models and Algorithms for Data and Text Mining Laboratory (MADLab), Department of Informatics, Systems and Communication (DISCo), University of Milano - Bicocca, Milan, Italy.
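The AMDA idea of scaling appliance contributions by relative impact can be sketched as follows. The exact modulation rule is an assumption; here, appliances with a larger share of the aggregate get a gentler random scaling, and the aggregate signal is rebuilt from the modulated traces.

```python
# Illustrative AMDA-style augmentation sketch: scale each appliance trace by a
# random factor whose spread shrinks with the appliance's relative share of
# the aggregate. The modulation rule is an assumption, not the paper's exact one.

import random

def amda_augment(appliance_traces, base_spread=0.5, rng=random):
    totals = {a: sum(t) for a, t in appliance_traces.items()}
    grand = sum(totals.values()) or 1.0
    augmented = {}
    for a, trace in appliance_traces.items():
        share = totals[a] / grand                 # relative impact in [0, 1]
        spread = base_spread * (1.0 - share)      # dominant loads vary less
        factor = 1.0 + rng.uniform(-spread, spread)
        augmented[a] = [p * factor for p in trace]
    # Rebuild the aggregate signal from the modulated appliance traces.
    aggregate = [sum(vals) for vals in zip(*augmented.values())]
    return augmented, aggregate

random.seed(0)
traces = {"chp": [50.0, 55.0, 60.0], "compressor": [5.0, 6.0, 5.5]}
aug, agg = amda_augment(traces)
print(len(agg))  # -> 3
```

Each call produces a new plausible aggregate/appliance pair, cheaply enlarging the training set without re-running the Digital Twin simulation.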


Meta-Reinforcement Learning for Fast and Data-Efficient Spectrum Allocation in Dynamic Wireless Networks

Giwa, Oluwaseyi, Awodunmila, Tobi, Mohsin, Muhammad Ahmed, Bilal, Ahsan, Jamshed, Muhammad Ali

arXiv.org Artificial Intelligence

The dynamic allocation of spectrum in 5G/6G networks is critical to efficient resource utilization. However, applying traditional deep reinforcement learning (DRL) is often infeasible due to its immense sample complexity and the safety risks associated with unguided exploration, which can cause severe network interference. To address these challenges, we propose a meta-learning framework that enables agents to learn a robust initial policy and rapidly adapt to new wireless scenarios with minimal data. We implement three meta-learning architectures, model-agnostic meta-learning (MAML), recurrent neural network (RNN), and an attention-enhanced RNN, and evaluate them against a non-meta-learning DRL baseline, proximal policy optimization (PPO), in a simulated dynamic integrated access/backhaul (IAB) environment. Our results show a clear performance gap. The attention-based meta-learning agent reaches a peak mean network throughput of 48 Mbps, while the PPO baseline drops drastically to 10 Mbps. Furthermore, our method reduces SINR and latency violations by more than 50% compared to PPO. It also adapts quickly, reaching a fairness index of 0.7, indicating better resource allocation. This work demonstrates that meta-learning is an effective and safer option for intelligent control in complex wireless systems.
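The MAML component can be illustrated on a deliberately tiny problem. The sketch below is a first-order MAML loop on a 1-D toy objective (each "task" i has loss (theta - a_i)^2), not the paper's spectrum environment; learning rates and tasks are invented. The meta-update folds the post-adaptation gradients back into a shared initialization that adapts quickly to either task.

```python
# Minimal first-order MAML sketch on a toy 1-D problem (illustrative only):
# each task i has loss (theta - a_i)^2; the meta-update improves the shared
# initialization theta so one inner gradient step adapts well to any task.

def grad(theta, a):
    return 2.0 * (theta - a)          # d/dtheta of (theta - a)^2

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.05):
    meta_grad = 0.0
    for a in tasks:
        adapted = theta - inner_lr * grad(theta, a)   # inner adaptation step
        # First-order MAML: gradient of the post-adaptation loss w.r.t. the
        # adapted parameters, ignoring second-order terms.
        meta_grad += grad(adapted, a)
    return theta - outer_lr * meta_grad / len(tasks)

theta = 0.0
for _ in range(200):
    theta = maml_step(theta, tasks=[1.0, 3.0])
print(round(theta, 2))  # -> 2.0, midway between the two tasks
```

The initialization converges to the point equidistant from both task optima, which is exactly the "rapidly adaptable starting policy" intuition behind using MAML for new wireless scenarios.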


MILAAP: Mobile Link Allocation via Attention-based Prediction

Chen, Yung-Fu, Arora, Anish

arXiv.org Artificial Intelligence

Channel hopping (CS) communication systems must adapt to interference changes in the wireless network and to node mobility for maintaining throughput efficiency. Optimal scheduling requires up-to-date network state information (i.e., of channel occupancy) to select non-overlapping channels for links in interference regions. However, state sharing among nodes introduces significant communication overhead, especially as network size or node mobility scales, thereby decreasing the throughput efficiency of already capacity-limited networks. In this paper, we eschew state sharing while adapting the CS schedule based on a learning-based channel occupancy prediction. We propose the MiLAAP attention-based prediction framework for machine learning models of spectral, spatial, and temporal dependencies among network nodes. MiLAAP uses a self-attention mechanism that lets each node capture the temporospectral CS pattern in its interference region and accordingly predict the channel occupancy state within that region. Notably, the prediction relies only on locally and passively observed channel activities, and thus introduces no communication overhead. To deal with node mobility, MiLAAP also uses a multi-head self-attention mechanism that lets each node locally capture the spatiotemporal dependencies on other network nodes that can interfere with it and accordingly predict the motion trajectory of those nodes. Detecting nodes that enter or move outside the interference region is used to further improve the prediction accuracy of channel occupancy. We show that for dynamic networks that use local CS sequences to support relatively long-lived flow traffic, the channel state prediction accuracy of MiLAAP is a remarkable ~100% across different node mobility patterns, and it achieves zero-shot generalizability across different periods of CS sequences.
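The self-attention core of such a predictor can be sketched in pure Python: each recent sensing epoch attends over the window of locally observed channel-occupancy vectors, and the attended output for the latest epoch is thresholded into a predicted occupancy state. The dimensions, identity projections, and thresholding below are illustrative assumptions, not MiLAAP's actual architecture.

```python
# Pure-Python scaled dot-product self-attention over a window of locally
# observed channel-occupancy vectors. Identity Q/K/V projections and the
# final threshold are illustrative simplifications.

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(window):
    """window: list of T occupancy vectors (one per epoch, one entry per
    channel, 1.0 = busy)."""
    d = len(window[0])
    out = []
    for q in window:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in window]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, window))
                    for j in range(d)])
    return out

# Three epochs of passively observed occupancy over four channels.
window = [[1, 0, 0, 1], [1, 0, 1, 0], [1, 0, 0, 1]]
attended = self_attention(window)
pred = [1 if v >= 0.5 else 0 for v in attended[-1]]  # threshold the last epoch
print(pred)  # -> [1, 0, 0, 1]
```

The attention weights let epochs with similar occupancy patterns reinforce each other, which is the mechanism the framework exploits to recognize the recurring temporospectral CS pattern in a node's interference region.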


A fast sound power prediction tool for genset noise using machine learning

Pargal, Saurabh, Sane, Abhijit A.

arXiv.org Artificial Intelligence

This paper investigates the application of three machine learning regression algorithms, Kernel Ridge Regression (KRR), Huber Regressor (HR), and Gaussian Process Regression (GPR), to predicting sound power levels of gensets, offering significant value for marketing and sales teams during the early bidding process. When engine sizes and genset enclosure dimensions are tentative, and measured noise data is unavailable, these algorithms enable reliable noise level estimation for unbuilt gensets. The study utilizes high-fidelity datasets from over 100 experiments conducted at the Cummins Acoustics Technology Center (ATC) in a hemi-anechoic chamber, adhering to ISO 3744 standards. By using readily available information from the bidding and initial design stages, KRR predicts sound power with an average accuracy of within 5 dBA. While HR and GPR show slightly higher prediction errors, all models effectively capture the overall noise trends across various genset configurations. These findings present a promising method for early-stage noise estimation in genset design.
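For reference, KRR has a simple closed form: fit dual coefficients alpha by solving (K + lambda*I) alpha = y, then predict as a kernel-weighted sum over training points. The toy below uses an RBF kernel and invented features (normalized engine power and enclosure volume) purely for illustration; the real model is trained on the Cummins ATC measurements.

```python
# Toy Kernel Ridge Regression sketch (RBF kernel, ridge term lam). Features,
# targets, and hyperparameters are invented for illustration.

import math

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def solve(A, b):
    """Solve A w = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][k] * w[k] for k in range(r + 1, n))) / M[r][r]
    return w

def krr_fit(X, y, lam=0.1):
    n = len(X)
    K = [[rbf(X[i], X[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, y)            # alpha = (K + lam*I)^-1 y

def krr_predict(alpha, X, x):
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, X))

# Hypothetical features: [engine power (normalized), enclosure volume (normalized)]
X = [[0.2, 0.3], [0.5, 0.5], [0.8, 0.7], [1.0, 0.9]]
y = [78.0, 85.0, 92.0, 96.0]      # sound power level, dBA
alpha = krr_fit(X, y)
print(round(krr_predict(alpha, X, [0.5, 0.5]), 1))
```

With only four training points the fit is heavily regularized, but the structure mirrors what the paper's KRR model does at full scale on the ATC dataset.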


Advanced Predictive Quality Assessment for Ultrasonic Additive Manufacturing with Deep Learning Model

Poudel, Lokendra, Jha, Sushant, Meeker, Ryan, Phan, Duy-Nhat, Bhowmik, Rahul

arXiv.org Artificial Intelligence

Ultrasonic Additive Manufacturing (UAM) employs ultrasonic welding to bond similar or dissimilar metal foils to a substrate, resulting in solid, consolidated metal components. However, certain processing conditions can lead to inter-layer defects, affecting the final product's quality. This study develops a method to monitor in-process quality using deep learning-based convolutional neural networks (CNNs). The CNN models were evaluated on their ability to classify samples with and without embedded thermocouples across five power levels (300W, 600W, 900W, 1200W, 1500W) using thermal images with supervised labeling. Four distinct CNN classification models were created for different scenarios, including without (baseline) and with thermocouples, only without thermocouples across power levels, only with thermocouples across power levels, and combined without and with thermocouples across power levels. The models achieved 98.29% accuracy on combined baseline and thermocouple images, 97.10% for baseline images across power levels, 97.43% for thermocouple images, and 97.27% for both types across power levels. The high accuracy, above 97%, demonstrates the system's effectiveness in identifying and classifying conditions within the UAM process, providing a reliable tool for quality assurance and process control in manufacturing environments.

Key Words: Machine Learning, Convolutional Neural Network, Image Analysis, Ultrasonic Additive Manufacturing, In situ Monitoring, Anomaly Detection

1.0 Introduction

Additive manufacturing (AM) refers to a set of computer-controlled techniques that create three-dimensional objects by layering materials (Ansari et al., 2022; Saimon et al., 2024). Ultrasonic additive manufacturing (UAM) is a standout solid-state manufacturing method within this group, producing nearly finished metal parts without melting the materials.
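The core operation of the CNN classifiers above is 2-D convolution over the thermal images. The pure-Python demo below applies a fixed vertical-edge kernel to a toy "thermal frame"; real models learn their filters from data, and the frame and kernel here are illustrative only.

```python
# Minimal pure-Python 2-D convolution on a toy thermal frame. The Sobel-like
# kernel is fixed here for illustration; CNNs learn such filters from data.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1            # valid (no-padding) output height
    w = len(image[0]) - kw + 1         # valid output width
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(w)]
            for r in range(h)]

# 4x4 toy thermal frame: hot region (1.0) on the left, cold (0.0) on the right.
frame = [[1.0, 1.0, 0.0, 0.0]] * 4
sobel_like = [[1, 0, -1],
              [2, 0, -2],
              [1, 0, -1]]
fmap = conv2d(frame, sobel_like)
print(fmap)  # -> [[4.0, 4.0], [4.0, 4.0]]
```

The strong uniform response marks the hot/cold boundary; stacks of such learned filters, followed by pooling and a classifier head, are what let the models separate the thermocouple/power-level conditions from thermal imagery.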


NaviSlim: Adaptive Context-Aware Navigation and Sensing via Dynamic Slimmable Networks

Johnsen, Tim, Levorato, Marco

arXiv.org Artificial Intelligence

Small-scale autonomous airborne vehicles, such as micro-drones, are expected to be a central component of a broad spectrum of applications ranging from exploration to surveillance and delivery. This class of vehicles is characterized by severe constraints in computing power and energy reservoir, which impairs their ability to support the complex state-of-the-art neural models needed for autonomous operations. The main contribution of this paper is a new class of neural navigation models -- NaviSlim -- capable of adapting the amount of resources spent on computing and sensing in response to the current context (i.e., difficulty of the environment, current trajectory, and navigation goals). Specifically, NaviSlim is designed as a gated slimmable neural network architecture that, different from existing slimmable networks, can dynamically select a slimming factor to autonomously scale model complexity, which consequently optimizes execution time and energy consumption. Moreover, different from existing sensor fusion approaches, NaviSlim can dynamically select power levels of onboard sensors to autonomously reduce power and time spent during sensor acquisition, without the need to switch between different neural networks. By means of extensive training and testing in the robust simulation environment Microsoft AirSim, we evaluate our NaviSlim models on scenarios of varying difficulty; on the test set, model complexity was dynamically reduced by 57-92% on average, with sensor utilization between 61-80%, as compared to static neural networks designed to match the computing and sensing required by the most difficult scenario.
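The gating idea can be sketched as a small mapping from a context/difficulty score to a slimming factor and a sensor power level, with the slimming factor scaling how many units of each layer stay active. The discrete factors, thresholds, and layer widths below are invented for illustration; NaviSlim learns this gate rather than hard-coding it.

```python
# Hedged sketch of the gating idea behind a slimmable network: map a context
# difficulty score to a slimming factor and sensor power level. All thresholds
# and widths are invented for illustration.

SLIM_FACTORS = [0.25, 0.5, 0.75, 1.0]        # fraction of each layer kept active
SENSOR_LEVELS = ["low", "medium", "high"]    # onboard sensor power levels

def gate(difficulty):
    """Map a difficulty score in [0, 1] to a slimming factor and sensor level."""
    slim = SLIM_FACTORS[min(int(difficulty * len(SLIM_FACTORS)),
                            len(SLIM_FACTORS) - 1)]
    sensor = SENSOR_LEVELS[min(int(difficulty * len(SENSOR_LEVELS)),
                               len(SENSOR_LEVELS) - 1)]
    return slim, sensor

def active_units(layer_widths, slim):
    """Number of units kept per layer at the chosen slimming factor."""
    return [max(1, int(w * slim)) for w in layer_widths]

slim, sensor = gate(0.3)                     # an easy scenario
print(slim, sensor, active_units([64, 128, 64], slim))  # -> 0.5 low [32, 64, 32]
```

Easy contexts run a narrow network with low-power sensing; only the hardest scenarios pay for the full-width model, which is where the reported 57-92% complexity reduction comes from.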


Algorithm for AGC index management against crowded radio environment

Joly, Morgane, Rivière, Fabian, Renault, Éric

arXiv.org Artificial Intelligence

Connected devices are part of everyday life. The proliferation of connected portable devices such as mobile phones, laptops, smart watches, and tablets, as well as non-portable connected devices such as TVs and video game consoles, saturates the environment with RF signals. In parallel to the reception of desired data from its communication partner(s), such a connected device also receives unwanted signals, so-called interferers. Interferers, especially from Wi-Fi signals, can occur randomly in the form of signal bursts of variable duration, with a signal strength possibly much higher than the desired signal. Interferers with a high signal strength can saturate the receiver, preventing proper reception of the desired data. Some techniques tackle this issue by continuously monitoring the received signal strength and immediately adjusting the receiver gain to avoid saturation while still maintaining the highest sensitivity level. However, when operating popular wireless communication protocols such as Wireless PAN (Bluetooth, BLE, Zigbee...), the receiver is not allowed to adjust the gain during the data payload. RF receivers for these communication protocols therefore adjust the gain during a time interval prior to the payload reception, based on the real-time received signal, and freeze the gain just before switching to the payload reception period. This is illustrated in figure 1. Because interferers are random in occurrence and strength, one may appear during the data payload; the receiver may then saturate, causing data loss.
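The adjust-then-freeze behavior described above can be sketched as a simple loop: during the pre-payload interval, the receiver steps its AGC gain index down when the post-gain level would saturate and up when there is headroom, then freezes the index for the payload. The thresholds, gain step, and sample values below are assumptions for illustration.

```python
# Illustrative AGC sketch: track signal strength during the pre-payload
# interval, step the gain index, and return the value frozen for the payload.
# Thresholds, step size, and index range are invented for this example.

SAT_DBM = -20.0      # above this post-gain level, the front end saturates
WEAK_DBM = -60.0     # below this, we can afford more gain for sensitivity
GAIN_STEP_DB = 6.0   # gain change per AGC index step

def agc_track(rssi_samples, gain_index=4, max_index=7):
    for rssi in rssi_samples:
        level = rssi + gain_index * GAIN_STEP_DB       # signal after gain
        if level > SAT_DBM and gain_index > 0:
            gain_index -= 1                            # back off to avoid clipping
        elif level < WEAK_DBM and gain_index < max_index:
            gain_index += 1                            # recover sensitivity
    return gain_index                                  # frozen for the payload

# Pre-payload interval: a strong Wi-Fi burst appears mid-way through.
preamble = [-70.0, -70.0, -35.0, -35.0, -35.0, -70.0]
print(agc_track(preamble))  # -> 2
```

The failure mode the text describes is visible here: if the burst instead arrives after the freeze, the loop never runs during the payload, so a saturating interferer goes uncorrected and the payload is lost.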